Astrophysics : Worked Data Reduction Process
Data reduction is important for photometry because dark current, bias and flat-field effects all alter the number of counts in a frame. Correcting for them may not produce a significant visual difference (as is the case with Fig. 1 on the right), but any flux calculated from the corrected frame will be more accurate.

Step-by-step: The general process for data reduction is outlined below with the aid of exaggerated mock results for illustrative purposes (image of M42 taken by the Hubble Space Telescope. Credit: NASA/ESA).
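As a minimal sketch of the arithmetic involved (the page does not prescribe an implementation; the function name, argument names and the assumption that the master dark is already bias-subtracted are illustrative), the standard corrections could be applied like this:

```python
import numpy as np

def reduce_frame(raw, master_bias, master_dark, master_flat,
                 exp_time, dark_exp_time):
    """Apply bias, dark and flat-field corrections to a raw frame.

    All image inputs are 2-D numpy arrays of counts; the exposure
    times are in seconds. Assumes master_dark is bias-subtracted.
    """
    # Scale the dark frame to match the science exposure time
    dark = master_dark * (exp_time / dark_exp_time)
    # Normalise the flat so dividing by it preserves average counts
    flat = master_flat / np.median(master_flat)
    # Standard reduction: subtract bias and dark, divide by the flat
    return (raw - master_bias - dark) / flat
```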
Stacking: In some situations (particularly when observing extended objects), the object you wish to image is much fainter than another bright object in the field of view, or the object simply has a very large range of intensities. This raises issues because the brighter parts of the image saturate quickly, leaving a very low S/N ratio in the fainter areas. It is when this occurs that stacking becomes useful. By taking multiple exposures of the maximum length possible before saturation occurs, aligning them (this is called registering) and then adding the images together, one can acquire a much longer effective exposure, thereby enhancing detail in the fainter regions. In pseudo-code, the process (a runnable sketch follows below) looks as follows:

```
FIND x-y coords of brightest star for ALL FILES
width_diff  = max x-coord - min x-coord
height_diff = max y-coord - min y-coord
MAKE 3D array [image width + width_diff, image height + height_diff, number of files]
FOR n = 1 TO number of files
    3D array[*, *, n] = file 'n' offset by (max x-y coords - star x-y coords for file 'n')
ENDFOR
SUM 3D array over 3rd dimension
```
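The following is one possible Python rendering of that pseudo-code (the page gives only the outline; the function name, the use of NumPy, and the brightest-pixel stand-in for "brightest star" are assumptions):

```python
import numpy as np

def stack_frames(frames):
    """Register frames on their brightest pixel and sum them.

    frames : list of equally-shaped 2-D numpy arrays of counts.
    The brightest pixel stands in for the brightest star here;
    a real pipeline would use a centroid or cross-correlation.
    """
    # Locate the brightest pixel (y, x) in every frame
    coords = [np.unravel_index(np.argmax(f), f.shape) for f in frames]
    ys = [c[0] for c in coords]
    xs = [c[1] for c in coords]

    # Pad the output canvas by the spread in star positions
    h, w = frames[0].shape
    height_diff = max(ys) - min(ys)
    width_diff = max(xs) - min(xs)
    stack = np.zeros((len(frames), h + height_diff, w + width_diff))

    # Shift each frame so its star lands at the same canvas position
    for n, f in enumerate(frames):
        dy = max(ys) - ys[n]
        dx = max(xs) - xs[n]
        stack[n, dy:dy + h, dx:dx + w] = f

    # Sum over the frame axis to get the long effective exposure
    return stack.sum(axis=0)
```

Note that this pads the non-overlapping borders with zeros; in practice one would crop to the common overlap region (or average rather than sum) before doing photometry.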
[Fig. 1: exaggerated mock results for each reduction step, applied to a Hubble Space Telescope image of M42. Credit: NASA/ESA]